Watch our Video

Future Charity Part 1 - Charities and AI in 2026

Charity AI in 2026 - where charities are today, the key issues they face, public opinion, risks and what they say they need most

Charity Sector AI: Where We Are Now — and Why It Matters

AI is no longer something that is “coming” to the charity sector. It is already here.

Across the UK, charities are quietly using AI every day to write bids, draft communications, analyse data and manage workload. Much of this use is informal, ungoverned and happening without shared understanding or leadership. In many cases, trustees do not realise it is happening at all. This gap between use and oversight creates both opportunity and real risk.

The Future Charity Report – Part 1 brings together the strongest evidence yet on Charity Sector AI adoption. Drawing on national surveys, real‑time benchmarking from thousands of charities and trusted external research, it provides a clear and grounded picture of where the sector stands today.

We are very grateful to the GSR Foundation, whose funding makes our work possible. We would also like to thank Microsoft and the people there with whom we work, and our corporate partners, who fund and support our work, including those who give us access to their in-depth technology expertise, often on a pro bono basis.


Key Findings: Charity Sector AI in 2026

AI is Now Widely Used — but not strategically and often lacking oversight

“AI is being used informally, without anyone really talking about it.”

Most UK charities are already using AI in some form, typically through individual staff or volunteers using tools like Microsoft Copilot. However, this use is rarely considered in the strategic context of the huge impact AI will have on society. Only a very small minority of charities are deploying AI with clear organisational oversight, agreed policies or trustee‑level ownership.

This creates a growing gap between what is happening in practice and what boards believe is happening, even though trustees remain legally responsible for AI‑related risks such as data protection, safeguarding and bias.

Learning: AI use has continued to accelerate but governance has not kept pace.

Most Charities Are Still at the Start of Their AI Journey

“Our organisation understands the importance of keeping up with AI, otherwise we risk being left behind.”

Around two thirds of charities describe themselves as exploring or experimenting with AI. Fewer than one in four have approved tools, policies or training in place, and fully embedded use remains uncommon.

Despite this early stage, pressure to engage is strong. Many charities feel they cannot afford to ignore AI, even if they are unsure how to proceed safely.

Learning: Momentum is building faster than confidence, and we risk losing public trust unless we act to significantly improve trustee oversight of AI.

AI is Seen as an Opportunity, but a Risky One

“It feels like standing at the base of a mountain and not knowing which path to take.”

Just over half of charities strongly agree that AI could benefit their organisation. At the same time, concern is widespread. Data protection is the single biggest worry, followed by safeguarding, ethics and reputational risk.

A notable minority of charities still believe AI is not relevant to their work at all, including some grant makers, highlighting how uneven understanding remains across the sector.

Learning: Interest is high, but fear and uncertainty are holding many charities back.

Operational Controls are Improving Faster than Governance

“We haven’t formally discussed AI at a Trustee meeting yet.”

Charity Excellence system benchmarking data shows that practical risk controls — such as data protection measures, human review of AI‑generated funding bids, and safeguarding in AI‑enabled meetings — are improving.

However, all three board‑level AI governance controls remain rated Red across the sector:

  • Strategic assessment of AI’s impact
  • Clear trustee or committee responsibility for AI
  • Organisation‑wide training and compliance

Learning: Charities are managing immediate risks, but struggling to embed AI into governance and management.

Public Trust is Cautious, Conditional and Context‑dependent

Public attitudes towards Charity Sector AI are not hostile, but they are cautious. Around a third of people feel positive, a quarter feel negative, and the rest are unsure. Support is strongest where AI is used to:

  • Protect charitable funds (e.g. fraud detection).
  • Improve efficiency or back‑office work.

Trust drops sharply when AI is perceived to:

  • Influence decisions about who receives support.
  • Replace human judgement.
  • Use sensitive personal data without transparency.

Learning: The public is not against charities using AI, and its trust depends, at least in part, on the type of organisation using it — and charities are highly trusted. However, the public is cautious and that trust is conditional. The current extensive charity use of AI without visible, effective governance, clear human control and honest communication creates a risk of our own making. That must change, because trust is critical for charities and will be even more so in an AI‑enabled world of slop, scams and fake news. We need intentional transparency: being explicit about why AI is used, where humans remain in charge, how data is protected, and where AI will not be used.

AI Imagery Presents a Specific Trust Risk

Both charities and the public express unease about AI‑generated imagery. Many charities have tried it and then stopped, citing ethical, reputational or authenticity concerns.

Research shows strong public support for authentic imagery, with lower acceptance where AI images appear realistic or emotionally manipulative, especially in sensitive contexts.

Learning: AI imagery needs careful, values‑led decision‑making and transparency.

Resistance is Usually Thoughtful, not Anti‑technology

“It’s not really resistance. It’s that we don’t know enough yet.”

Where charities hesitate, it is rarely due to blanket opposition to AI. The main barriers identified are:

  • Lack of understanding.
  • Fear of unintended harm.
  • Data protection and safeguarding risks.
  • Limited time, funding and capacity.

Charities are clear about what they need next: plain‑English training, practical guidance, ready‑to‑use policies and funding to build capacity.

Learning: Confidence will come from support, not pressure.

Resources

Charity Excellence provides the following free AI support:

  • Training. Trustees and management can complete the free Charity Excellence Learning online AI courses for trustees and for management.
  • Best Practice. Charity Excellence AI Ready runs automatically when users complete Health Check questionnaires and connects them to a wide range of AI help, resources, toolkits and policies.
  • Policies. 60+ policies can be downloaded by logging in to Charity Excellence; where appropriate, these have been updated to reflect AI requirements.
  • Funding. There is very little UK AI funding for non-profits. We will soon launch a tech and innovation funding list of funders who may be open to an AI application.

What This Means for Charities Now

“People are already using it day to day, but it’s not really acknowledged and there are no guardrails.”

The evidence shows that Charity Sector AI is already part of day‑to‑day reality but the sector is at risk of moving forward without shared standards, confidence or trust.

Charities that succeed will be those that:

  • Keep humans in control.
  • Are transparent about why and how AI is used.
  • Protect data and safeguard people.
  • Support trustees, staff and volunteers with practical guidance.

Download the Full Report

A detailed, evidence‑based analysis of Charity Sector AI use, attitudes, risks and support needs in 2026.

👉 Charity Excellence AI Report April 2026.

Find the Funding and Free Help Your Charity Needs

A registered charity ourselves, the CEF works for any non-profit, not just charities.

Plus 60+ policies, 8 online health checks, the Quality Mark and a huge resource base. Our AI Ready programme and free Charity Excellence Learning online AI training courses give non-profits everything they need to make effective use of AI and stay safe.

Find Funding, Free Help & Resources - Everything Is Free.

Register Now!

AI Reports, Surveys and Data Used

Charity Excellence data.

  1. AI Survey March 2026 (where we are now March 2026).
  2. Grant Makers Survey 2026 (AI Sections – bid writing).
  3. Charity Data Store – Data Extracts 2025 and 2026 (14 AI Ready metrics).
  4. AI Survey 2024 (charity sector use of AI).

We are also very happy to recognise the work of others that was used in creating the report.

  1. Charity Digital Skills Report 2025: Zoe Amar Digital, Media Trust, CAST.
  2. Trust in Charities 2025: Charity Commission for England and Wales.
  3. Charities and Artificial Intelligence – Trustee Guidance: Charity Commission for England and Wales.
  4. AI adoption among charities shows no sign of slowing down: Zoe Amar, Civil Society, March 2026.
  5. CAST Charity AI Survey 2026.
  6. Charity Tracker Public Perceptions research, a UK‑wide survey of 3,000 adults conducted in December 2025 and reported by Civil Society in February 2026, which focused specifically on charities’ use of AI.
  7. Charities Aid Foundation (CAF) international research into public views on charities using AI, based on over 6,000 respondents across 10 countries, with additional UK focus groups.
  8. UK Government Public Attitudes to Data and AI Tracker Survey (Wave 4), published by DSIT in December 2024, which provides context on public trust in AI and data use (but is not charity‑specific).
  9. University of East Anglia – Public Perceptions of AI Imagery.
  10. The Saltways, AI‑Generated Imagery in the Charity Sector, February 2026.

Charity AI Survey 2026 Methodology

The methodology is detailed at the end of the downloadable full Charity AI Survey 2026 report.

Ethics Note

The Charity Excellence AI Survey Agent was used in collating and analysing data, but under the direction and control of a human.

Charity AI 2026 FAQs

How widely is AI currently used across the charity sector?

AI is already widely used by UK charities, most often through individual staff or volunteers using generative AI tools. However, this use is rarely strategic and is often informal, with limited organisational oversight or trustee involvement.

Why is there concern about how AI is being used by charities?

The main concern is the growing gap between day‑to‑day AI use and governance. Trustees remain legally responsible for risks such as data protection, safeguarding and bias, yet many boards are unaware of how AI is already being used within their organisations.

Where are most charities in their AI journey?

Most charities are still at an early stage. Around two thirds describe themselves as exploring or experimenting with AI, while fewer than one in four have approved tools, policies or training in place. Fully embedded use remains uncommon.

Do charities see AI as an opportunity or a threat?

Most charities see AI as both an opportunity and a risk. Just over half strongly agree AI could benefit their organisation, but concern is widespread. Data protection is the single biggest worry, followed by safeguarding, ethics and reputational risk.

What gaps remain at board and trustee level?

Key governance controls remain weak across the sector: strategic assessment of AI’s impact, clear trustee or committee responsibility for AI, and organisation‑wide training and compliance. This makes it harder to embed AI safely into management and decision‑making.

How does the public feel about charities using AI?

Public attitudes to charity sector AI are cautious rather than hostile. Around a third of people feel positive, a quarter feel negative, and the remainder are unsure. Trust depends heavily on how AI is used and for what purpose.

When is public trust in charity AI use highest and lowest?

Trust is strongest when AI is used to protect funds, detect fraud or improve back‑office efficiency. It drops sharply when AI is seen to influence decisions about who receives support, replace human judgement, or use sensitive personal data without transparency.

Why does AI‑generated imagery pose a specific trust risk?

Both charities and the public express unease about AI‑generated imagery. Many charities have tried it and then stopped due to ethical, reputational or authenticity concerns. Public acceptance is lower where images appear emotionally manipulative or are used in sensitive contexts.

What support do charities say they need most in using AI?

Charities consistently ask for plain‑English training, practical guidance, ready‑to‑use policies and templates, and funding or time to build capacity. Confidence is more likely to come from support and clarity than pressure to move faster.


With 50,000 members, growing by 3,000 a month, we are the largest and fastest growing UK charity community.


Charity Excellence Framework CIO

14 Blackmore Gate
Buckland
Buckinghamshire
United Kingdom
HP22 5JT
charity number: 1195568
Copyright 2016-2026 All Rights Reserved by Charity Excellence Framework